
    Immersive interconnected virtual and augmented reality: a 5G and IoT perspective

    Despite remarkable advances, current augmented and virtual reality (AR/VR) applications remain a largely individual and local experience. Interconnected AR/VR, where participants can virtually interact across vast distances, is still a distant dream. The great barrier between current technology and such applications is the stringent end-to-end latency requirement, which should not exceed 20 ms in order to avoid motion sickness and other discomforts. Bringing AR/VR to the next level to enable immersive interconnected AR/VR will require significant advances towards 5G ultra-reliable low-latency communication (URLLC) and a Tactile Internet of Things (IoT). In this article, we articulate the technical challenges of a future AR/VR end-to-end architecture that combines 5G URLLC and Tactile IoT technology to support this next generation of interconnected AR/VR applications. Through the use of IoT sensors and actuators, AR/VR applications will be aware of the environmental and user context, supporting human-centric adaptations of the application logic and lifelike interactions with the virtual environment. We present potential use cases and the required technological building blocks. For each, we review the current state of the art and the challenges that must be addressed before the dream of remote AR/VR interaction can become reality.
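The 20 ms end-to-end bound cited in the abstract implies a budget that must be split across every stage of the sensing-transmission-rendering pipeline. A minimal sketch of such a budget check, with purely illustrative per-component delays (none of the component values below come from the article):

```python
# Hypothetical end-to-end latency budget check for interconnected AR/VR.
# The 20 ms bound is from the abstract; per-component delays are illustrative.
MOTION_SICKNESS_BUDGET_MS = 20.0

def total_latency_ms(components):
    """Sum the per-component delays (in ms) along the AR/VR pipeline."""
    return sum(components.values())

pipeline = {
    "sensing": 2.0,      # IoT sensor sampling
    "uplink": 4.0,       # 5G URLLC air interface (uplink)
    "processing": 7.0,   # rendering / application logic
    "downlink": 4.0,     # 5G URLLC air interface (downlink)
    "display": 2.0,      # display refresh
}

total = total_latency_ms(pipeline)
within_budget = total <= MOTION_SICKNESS_BUDGET_MS
print(f"total = {total:.1f} ms, within 20 ms budget: {within_budget}")
```

Even with these optimistic numbers the pipeline consumes almost the entire budget, which is why the article argues that every building block, from the air interface to the application logic, needs latency-aware design.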

    QoE evaluation in adaptive streaming: enhanced MDT with deep learning

    We propose an architecture for performing virtual drive tests for mobile network performance evaluation by leveraging radio signal strength data from user equipment. Our architecture comprises three main components: (i) a pattern recognizer that learns a typical (nominal) behavior for application KPIs (key performance indicators); (ii) a predictor that maps from network KPIs to application KPIs; (iii) an anomaly detector that compares predicted application performance with the learned nominal pattern. To simulate user traces, we utilize a commercial state-of-the-art network optimization tool, which collects application and network KPIs at different geographical locations at various times of the day, to train an initial learning model. Although the collected data relates to an adaptive video streaming application, the proposed architecture is flexible, autonomous, and can be used for other applications. We perform extensive numerical analysis to demonstrate the key parameters impacting video quality prediction and anomaly detection. Playback time is shown to be the most important parameter affecting video quality, most likely due to video packet buffering during playback. We additionally observe that network KPIs, which characterize the cellular connection strength, improve QoE (quality of experience) estimation in anomalous cases diverging from the nominal. The efficacy of our approach is demonstrated with a mean-maximum F1-score of 77%.
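The three-component architecture in the abstract can be sketched end to end. All names, KPI scales, weights, and the threshold below are hypothetical placeholders, not the paper's actual deep learning model; the sketch only illustrates how a nominal pattern, a network-to-application KPI predictor, and an anomaly detector fit together:

```python
# Sketch (hypothetical names and values) of the virtual drive test pipeline:
# (i) pattern recognizer learns a nominal application KPI,
# (ii) predictor maps network KPIs to an application KPI,
# (iii) anomaly detector compares prediction against the nominal pattern.
from statistics import mean

def learn_nominal_pattern(app_kpi_history):
    """(i) Pattern recognizer: here simply the mean of past video-quality KPIs."""
    return mean(app_kpi_history)

def predict_app_kpi(network_kpis, weights, bias):
    """(ii) Predictor: a linear stand-in for the learned model, mapping
    normalized network KPIs (e.g. signal strength) to an application KPI."""
    return bias + sum(w * network_kpis[k] for k, w in weights.items())

def is_anomalous(predicted, nominal, tolerance=0.15):
    """(iii) Anomaly detector: flag when the prediction deviates from the
    nominal pattern by more than a relative tolerance (tolerance is illustrative)."""
    return abs(predicted - nominal) > tolerance * nominal

nominal = learn_nominal_pattern([4.1, 4.0, 3.9, 4.2])   # MOS-like quality scale
weights = {"rsrp_norm": 1.5, "sinr_norm": 1.0}          # illustrative weights
sample = {"rsrp_norm": 0.4, "sinr_norm": 0.3}           # weak-connection trace
predicted = predict_app_kpi(sample, weights, bias=2.0)
print(f"predicted={predicted:.2f}, anomalous={is_anomalous(predicted, nominal)}")
```

In the paper the predictor is a trained deep learning model rather than this linear toy, but the control flow is the same: the detector only fires when predicted application performance diverges from the learned nominal behavior, which is exactly the anomalous regime where the abstract reports that connection-strength KPIs improve QoE estimation.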